A blind or blinded experiment is a scientific experiment where some of the persons involved are prevented from knowing certain information that might lead to conscious or unconscious bias on their part, invalidating the results.
For example, when asking consumers to compare the tastes of different brands of a product, the identities of the latter should be concealed — otherwise consumers will generally tend to prefer the brand they are familiar with. Similarly, when evaluating the effectiveness of a medical drug, both the patients and the doctors who administer the drug may be kept in the dark about the dosage being applied in each case — to forestall any chance of a placebo effect, observer bias, or conscious deception.
Blinding can be imposed on researchers, technicians, subjects, funders, or any combination of them. The opposite of a blind trial is an open trial. Blind experiments are an important tool of the scientific method, in many fields of research — from medicine, forensics, psychology and the social sciences, to basic sciences such as physics and biology and to market research. In some disciplines, such as drug testing, blind experiments are considered essential.
The terms blind (adjective) or to blind (transitive verb) when used in this sense are figurative extensions of the literal idea of blindfolding someone. The terms masked or to mask may be used for the same concept. (This is commonly the case in ophthalmology, where the word 'blind' is often used in the literal sense.)
One of the earliest suggestions that a blinded approach to experiments would be valuable came from Claude Bernard, who recommended that any scientific experiment be split between the theorist who conceives the experiment and a naive (and preferably uneducated) observer who registers the results without foreknowledge of the theory or hypothesis being tested. This suggestion contrasted starkly with the prevalent Enlightenment-era attitude that scientific observation can only be objectively valid when undertaken by a well-educated, informed scientist.[1]
Single-blind describes experiments where information that could introduce bias or otherwise skew the result is withheld from the participants, but the experimenter will be in full possession of the facts.
In a single-blind experiment, the individual subjects do not know whether they are so-called "test" subjects or members of an "experimental control" group. Single-blind experimental design is used where the experimenters either must know the full facts (for example, when comparing sham to real surgery) and so cannot themselves be blind, or where they will not introduce further bias and so need not be blind. However, there is a risk that subjects are influenced by interaction with the researchers, known as experimenter's bias. Single-blind trials are especially risky in psychology and social science research, where the experimenter has an expectation of what the outcome should be and may consciously or subconsciously influence the behavior of the subject.
A classic example of a single-blind test is the "Pepsi challenge." A marketing person prepares several cups of cola labeled "A" and "B". One set of cups has Pepsi, the other has Coca-Cola. The marketing person knows which soda is in which cup but is not supposed to reveal that information to the subjects. Volunteer subjects are encouraged to try the two cups of soda and are polled for which one they prefer. The problem with a single-blind test like this is that the marketing person can give subconscious cues (intentional or not) which bias the volunteer. In addition, it is possible for the marketing person to prepare the sodas differently (more ice in one cup, placing one cup closer to the volunteer, etc.), which can also cause bias. If the marketing person is employed by the company running the challenge, there is also the possibility of a conflict of interest, since the marketing person knows that future income may depend on the results of the test.
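As a rough illustration (not any standard taste-test protocol), the label-to-brand key in such a single-blind test might be handled as in the following sketch; the labels, brand list, and function names are invented for the example:

```python
import random

BRANDS = ["Pepsi", "Coca-Cola"]  # the two brands from the example above

def prepare_cups():
    """Experimenter's step: assign brands to neutral labels at random
    and keep the key; the subject never sees this mapping."""
    brands = BRANDS[:]
    random.shuffle(brands)
    return {"A": brands[0], "B": brands[1]}  # key, known to the experimenter only

def decode_preference(key, subject_choice):
    """Decode the subject's stated preference only after it is recorded."""
    return key[subject_choice]

key = prepare_cups()
# The subject tastes cups "A" and "B" and reports a preference, e.g. "A".
print("Subject preferred:", decode_preference(key, subject_choice="A"))
```

Note that the experimenter still holds the key during the test, which is exactly why the cues described above can leak through; a double-blind design removes that channel.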
Double-blind describes an especially stringent way of conducting an experiment, usually on human subjects, in an attempt to eliminate subjective bias on the part of both experimental subjects and the experimenters. In most cases, double-blind experiments are held to achieve a higher standard of scientific rigor.
In a double-blind experiment, neither the individual subjects nor the researchers know who belongs to the control group and who to the experimental group. Only after all the data have been recorded (and in some cases, analyzed) do the researchers learn which individuals are which. Performing an experiment in double-blind fashion lessens the influence of prejudices and unintentional physical cues on the results (the placebo effect, observer bias, and experimenter's bias). Random assignment of subjects to the experimental or control group is a critical part of double-blind research design. The key that identifies the subjects and the group to which they belong is kept by a third party and not given to the researchers until the study is over.
Double-blind methods can be applied to any experimental situation where there is the possibility that the results will be affected by conscious or unconscious bias on the part of the experimenter.
Computer-controlled experiments are sometimes also erroneously referred to as double-blind experiments, on the assumption that software cannot introduce the kind of direct bias that arises between researcher and subject. Experience with surveys presented to subjects through computers shows, however, that bias can easily be built into the process. Voting systems are likewise examples where bias can easily be built into an apparently simple machine-based system. In analogy to the human researcher described above, the part of the software that interacts with the human plays the role of the blinded researcher, while the part of the software that defines the key acts as the third party. An example is the ABX test, in which the human subject has to identify an unknown stimulus X as being either A or B.
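As a rough sketch of this structure (not any particular ABX implementation), the hidden key and the scoring step might look like the following; the stimulus names and functions are placeholders:

```python
import random

def abx_trial(stimulus_a, stimulus_b, get_subject_answer):
    """One ABX trial: X is secretly either A or B. The key (which one X is)
    stays inside the software, which thus plays the role of the third party."""
    x_is_a = random.choice([True, False])            # hidden key
    x = stimulus_a if x_is_a else stimulus_b         # unknown stimulus presented as X
    answer = get_subject_answer(stimulus_a, stimulus_b, x)  # subject answers "A" or "B"
    return answer == ("A" if x_is_a else "B")        # scored only after the response

# Example run: 20 trials with a subject who merely guesses (placeholder stimuli).
results = [abx_trial("stimulus_A.wav", "stimulus_B.wav",
                     lambda a, b, x: random.choice(["A", "B"]))
           for _ in range(20)]
print(f"Correct identifications: {sum(results)} / 20")
```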
Double-blinding is relatively easy to achieve in drug studies, by formulating the investigational drug and the control (either a placebo or an established drug) to have identical appearance (color, taste, etc.). Patients are randomly assigned to the control or experimental group and given random numbers by a study coordinator, who also encodes the drugs with matching random numbers. Neither the patients nor the researchers monitoring the outcome know which patient is receiving which treatment, until the study is over and the random code is broken.
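The coding step described above might be sketched as follows. This is a simplified illustration rather than a real trial procedure (actual trials use balanced randomization and secure code handling), and all names and code ranges here are invented:

```python
import random

def assign_and_code(patient_ids, arms=("investigational drug", "control")):
    """Coordinator's step: randomly assign each patient to an arm and label both
    the patient and the matching drug package with a random code number.
    Only the returned key reveals the mapping; it is held by the coordinator."""
    codes = random.sample(range(1000, 10000), len(patient_ids))
    key = {}           # code -> (patient, arm); kept by the third party
    blinded_view = {}  # patient -> code; all that patients and researchers see
    for patient, code in zip(patient_ids, codes):
        key[code] = {"patient": patient, "arm": random.choice(arms)}
        blinded_view[patient] = code
    return blinded_view, key

blinded_view, key = assign_and_code(["P01", "P02", "P03", "P04"])
print(blinded_view)   # researchers record outcomes against these codes only
# ... study runs to completion ...
print(key)            # "breaking the code" at the end of the study
```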
Effective blinding can be difficult to achieve where the treatment is notably effective (indeed, studies have been suspended in cases where the tested drug combinations were so effective that it was deemed unethical to continue withholding the findings from the control group and the general population),[2][3] or where the treatment is very distinctive in taste or has unusual side-effects that allow the researcher and/or the subject to guess which group they were assigned to. It is also difficult to use the double-blind method to compare surgical and non-surgical interventions (although sham surgery, involving a simple incision, might be ethically permitted). A good clinical protocol will foresee these potential problems to ensure blinding is as effective as possible. It has also been argued[4] that even in a double-blind experiment, general attitudes of the experimenter, such as skepticism or enthusiasm towards the tested procedure, can be subconsciously transferred to the test subjects.
Evidence-based medicine practitioners prefer blinded randomised controlled trials (RCTs) wherever such an experimental design is possible. These rank high on the hierarchy of evidence; only a meta-analysis of several well-designed RCTs is considered more reliable.
Modern nuclear physics and particle physics experiments often involve large numbers of data analysts working together to extract quantitative data from complex datasets. In particular, the analysts want to report accurate systematic error estimates for all of their measurements; this is difficult or impossible if one of the errors is observer bias. To remove this bias, the experimenters devise blind analysis techniques, where the experimental result is hidden from the analysts until they have agreed (based on properties of the data set other than the final value) that the analysis techniques are fixed.
One example of a blind analysis occurs in neutrino experiments, like the Sudbury Neutrino Observatory, where the experimenters wish to report the total number N of neutrinos seen. The experimenters have preexisting expectations about what this number should be, and these expectations must not be allowed to bias the analysis. Therefore, the experimenters are allowed to see only an unknown fraction f of the dataset. They use these data to understand the backgrounds, signal-detection efficiencies, detector resolutions, etc. However, since no one knows the "blinding fraction" f, no one has preexisting expectations about the meaningless neutrino count N′ = N × f in the visible data; therefore, the analysis does not introduce any bias into the final number N which is reported. Another blinding scheme is used in B meson analyses in experiments like BaBar and CDF; here, the crucial experimental parameter is a correlation between certain particle energies and decay times (which require an extremely complex and painstaking analysis) and particle charge signs, which are fairly trivial to measure. Analysts are allowed to work with all of the energy and decay data, but are forbidden from seeing the sign of the charge, and thus are unable to see the correlation (if any). At the end of the experiment, the correct charge signs are revealed; the analysis software is run once (with no subjective human intervention), and the resulting numbers are published. Searches for rare events, like electron neutrinos in MiniBooNE or proton decay in Super-Kamiokande, require a different class of blinding schemes.
The "hidden" part of the experiment—the fraction f for SNO, the charge-sign database for CDF—is usually called the "blindness box". At the end of the analysis period, one is allowed to "unblind the data" and "open the box".
In a police photo lineup, an officer shows a group of photos to a witness or crime victim and asks him or her to pick out the suspect. This is essentially a single-blind test of the witness's memory, and may be subject to subtle or overt influence by the officer. There is a growing movement in law enforcement to move to a double-blind procedure, in which the officer who shows the photos to the witness does not know which photo is of the suspect.[5][6]